visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,221 Posts
Post not yet marked as solved · 0 Replies · 19 Views
I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player with different options: hand gestures, a game controller, and teleporting. I have worked with Unreal Engine, where moving the player is easy and well documented, but I have not been able to find any information on how to achieve this in visionOS. Has anyone done something similar who could give me some advice or sample code? Any help appreciated. Guillermo
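A minimal sketch of one common workaround, not an official pattern: visionOS exposes no movable camera in an immersive space, so "moving the player" generally means translating the environment root in the opposite direction. The view name and the environment loading below are placeholders:

import SwiftUI
import RealityKit

struct PlayerMovementView: View {
    // Root of the large environment; dragging moves it inversely to
    // simulate player motion. Entities in its hierarchy need
    // InputTargetComponent and CollisionComponent to receive gestures.
    @State private var worldRoot = Entity()
    @State private var startPosition: SIMD3<Float>?

    var body: some View {
        RealityView { content in
            // Placeholder: load your environment under worldRoot here.
            content.add(worldRoot)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    if startPosition == nil { startPosition = worldRoot.position }
                    let start = value.convert(value.startLocation3D, from: .local, to: .scene)
                    let current = value.convert(value.location3D, from: .local, to: .scene)
                    // Inverted: dragging "pulls the world", i.e. moves the player.
                    worldRoot.position = startPosition! - (current - start)
                }
                .onEnded { _ in startPosition = nil }
        )
    }
}

A game controller could drive worldRoot.position the same way by polling GCController input each frame, and teleporting is just setting the position in a single step.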
Posted by gl5.
Post not yet marked as solved · 5 Replies · 473 Views
Hello everyone! For applications with multiple windows, I've noticed that if I close all open windows, visionOS reopens only the last-closed window when the app is launched again from the menu. Is there a way to set a main window that will always be opened when the application is launched?
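There is no documented "launch window" setting that I know of on visionOS 1.x, so this is only a sketch of a workaround idea: track whether the main window is open, and have secondary windows reopen it when the app becomes active. All window ids and views below are made up:

import SwiftUI
import Observation

// Shared flag reflecting whether the main window is currently open.
@Observable
final class WindowTracker {
    var mainIsOpen = false
}

@main
struct MultiWindowApp: App {
    @State private var tracker = WindowTracker()

    var body: some Scene {
        WindowGroup(id: "main") {
            MainView(tracker: tracker)
        }
        WindowGroup(id: "detail") {
            DetailView(tracker: tracker)
        }
    }
}

struct MainView: View {
    let tracker: WindowTracker
    var body: some View {
        Text("Main window")
            .onAppear { tracker.mainIsOpen = true }
            .onDisappear { tracker.mainIsOpen = false }
    }
}

struct DetailView: View {
    let tracker: WindowTracker
    @Environment(\.openWindow) private var openWindow
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Detail window")
            .onChange(of: scenePhase) { _, phase in
                // If the app was relaunched into this window alone,
                // bring the main window back.
                if phase == .active && !tracker.mainIsOpen {
                    openWindow(id: "main")
                }
            }
    }
}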
Posted by kentvchr.
Post not yet marked as solved · 0 Replies · 32 Views
We want to overlay a SwiftUI attachment on a RealityView, as is done in the Diorama sample. By default, attachments seem to be placed centered on their position. However, for our use case we need to set a different anchor point, so that the attachment is always aligned to one of the corners of the attachment view; e.g., the lower-left corner should be aligned with the attachment's position. Is this possible?
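A sketch of one possible approach (the id "infoCard" and the target position are made up): since each attachment is delivered as an Entity, you can measure its visual bounds and offset it so a chosen corner, rather than its center, lands on the target position:

import SwiftUI
import RealityKit

struct AnchoredAttachmentView: View {
    // Assumed target: where the attachment's lower-left corner should sit.
    let targetPosition = SIMD3<Float>(0, 1, -1)

    var body: some View {
        RealityView { content, attachments in
            if let card = attachments.entity(for: "infoCard") {
                let bounds = card.visualBounds(relativeTo: nil)
                // Shift by half the extents so the lower-left corner,
                // not the center, aligns with targetPosition.
                card.position = targetPosition + SIMD3<Float>(bounds.extents.x / 2,
                                                              bounds.extents.y / 2,
                                                              0)
                content.add(card)
            }
        } attachments: {
            Attachment(id: "infoCard") {
                Text("Hello").padding().glassBackgroundEffect()
            }
        }
    }
}

The same offset idea works for any corner by flipping the signs of the half-extents.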
Posted by waldgeist.
Post marked as solved · 4 Replies · 84 Views
New to Apple development. Vision Pro is the reason I got a developer license, and I am learning Xcode, SwiftUI, and so on. The Vision Pro tutorials seem to use Wi-Fi or the developer strap to connect the development environment to the Vision Pro. I have the developer strap, but I can't use it on my company computer. I have been learning with the developer tools, but I can't test the apps on my personal Vision Pro. Is there a way to generate an app file on the MacBook that I can download to the Vision Pro? This would be a file that I could transfer to cloud storage and download with Safari on the Vision Pro. I will eventually get a Vision Pro at work, but until then I want to start developing.
Post not yet marked as solved · 0 Replies · 44 Views
As the title says: while I can find the video captures on the desktop, I cannot find where the screenshots are stored, even when it reports that the screenshot succeeded. I am referencing this: https://developer.apple.com/documentation/visionos/capturing-screenshots-and-video-from-your-apple-vision-pro-for-2d-viewing
Post not yet marked as solved · 1 Reply · 74 Views
I have some USDZ files saved and I would like to make thumbnails for them, in 2D of course. I was checking Creating Quick Look Thumbnails to Preview Files in Your App, but it says "Augmented reality objects using the USDZ file format (iOS and iPadOS only)". I would like to have the same functionality in my visionOS app. How can I do that? I thought about using some API to convert the 3D asset into a 2D asset, but it would be better if I could do it inside the Swift environment. Basically, I want to do Image(uiImage: "my_usdz_file").
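For reference, this is roughly what the QuickLookThumbnailing call looks like; whether visionOS will render USDZ previews through it is exactly the open question the quoted documentation raises, so treat a failure here as expected rather than a bug in the call. The size and scale are arbitrary:

import QuickLookThumbnailing
import UIKit

// A sketch: ask QLThumbnailGenerator for a 2D thumbnail of a file URL.
func thumbnail(for usdzURL: URL, side: CGFloat = 512) async throws -> UIImage {
    let request = QLThumbnailGenerator.Request(
        fileAt: usdzURL,
        size: CGSize(width: side, height: side),
        scale: 2.0,
        representationTypes: .thumbnail
    )
    let representation = try await QLThumbnailGenerator.shared.generateBestRepresentation(for: request)
    return representation.uiImage
}

The result could then feed Image(uiImage:) directly.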
Posted by Hygison.
Post not yet marked as solved · 0 Replies · 39 Views
I have the Vision Pro developer strap. Do I need to do anything to make Instruments transfer the data over it rather than Wi-Fi, or will it do that automatically? It seems incredibly slow at transferring and then analyzing data. I can see the Vision Pro recognized in Configurator, so I assume the strap is working. Otherwise, any tips for speeding up Instruments? Capturing 5 minutes of gameplay (high-frequency) takes 30-40+ minutes to appear in Instruments on an M2 Max with 32 GB. Thanks!
Posted by yezz.
Post not yet marked as solved · 0 Replies · 27 Views
I have a Unity scene which I created for Vision Pro, and I have also created a biometric authentication application for visionOS using Xcode and Swift. What I want to do is call the Unity scene from the Xcode side after the authentication has taken place. I have seen a Medium post, but it only shows how to do that for iOS apps; I have not been able to do it for Vision Pro. I have followed this post: https://medium.com/mop-developers/launch-a-unity-game-from-a-swiftui-ios-app-11a5652ce476 I am doing all this because, as far as I know, Apple Vision Pro does not currently support Optic ID authentication with Unity's PolySpatial plugin. Any help on this will be appreciated. Thank you in advance.
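For the Swift side of the gate, the biometric check itself is plain LocalAuthentication, which maps to Optic ID on Vision Pro. A minimal sketch, assuming the check happens before any Unity content is presented (the reason string is a placeholder):

import LocalAuthentication

// Returns true only if the user passes the biometric (Optic ID) check.
func authenticate() async -> Bool {
    let context = LAContext()
    var error: NSError?
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        return false  // biometrics unavailable or not enrolled
    }
    do {
        return try await context.evaluatePolicy(
            .deviceOwnerAuthenticationWithBiometrics,
            localizedReason: "Unlock the experience"
        )
    } catch {
        return false
    }
}

How to hand control to the Unity-generated scene afterwards is the part the Medium post covers for iOS, and would still need a visionOS-specific answer.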
Posted by snusharma.
Post not yet marked as solved · 1 Reply · 87 Views
Our app needs to scan QR codes (or use a similar mechanism) to populate itself with content the user wants to see. Is there any update on QR code scanning availability on this platform? I asked this before but never got any feedback. I know there is no way to access the camera (which is an issue in itself), but the system could at least provide an API to scan codes. (It would also be cool if we were able to use the same codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create them via server-side JavaScript code.)
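Not a substitute for live camera scanning, but as a partial workaround: the Vision framework can decode QR codes from an image the app already has (dragged in, downloaded, or shared). A minimal sketch:

import Vision
import UIKit

// Decode any QR codes found in a still image; returns their payloads.
func decodeQRCodes(in image: UIImage) throws -> [String] {
    guard let cgImage = image.cgImage else { return [] }
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]  // restrict detection to QR codes
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
    return request.results?.compactMap { $0.payloadStringValue } ?? []
}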
Posted by waldgeist.
Post not yet marked as solved · 1 Reply · 113 Views
Hi there. I've been trying to take a snapshot programmatically on Apple Vision Pro but haven't succeeded. This is the code I am using so far:

func takeSnapshot<Content: View>(of view: Content) -> UIImage? {
    var image: UIImage?
    uiQueue.sync {
        let controller = UIHostingController(rootView: view)
        controller.view.bounds = UIScreen.main.bounds
        let renderer = UIGraphicsImageRenderer(size: controller.view.bounds.size)
        image = renderer.image { context in
            controller.view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
    return image
}

However, UIScreen is unavailable on visionOS. Any idea how I can achieve this? Thanks, Oscar
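One possible alternative, sketched under the assumption that an explicit size is acceptable: SwiftUI's ImageRenderer sizes the view itself, avoiding UIScreen entirely. Note it renders only SwiftUI content, not RealityKit scenes:

import SwiftUI
import UIKit

@MainActor
func takeSnapshot<Content: View>(of view: Content, size: CGSize) -> UIImage? {
    // Pin the view to the requested size so no screen bounds are needed.
    let renderer = ImageRenderer(content: view.frame(width: size.width, height: size.height))
    renderer.scale = 2.0  // output scale; pick whatever suits the export
    return renderer.uiImage
}

Called as, e.g., takeSnapshot(of: Text("Hello"), size: CGSize(width: 400, height: 300)).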
Posted by ojrlopez.
Post marked as solved · 1 Reply · 67 Views
I built two parts of my app a bit disjointedly: my physics component, which controls all SceneReconstruction, HandTracking, and WorldTracking; and my spatial GroupActivities component, which allows you to see the Personas of those who join the activity. My problem: when trying to use any DataProvider in a spatial experience, I get the ARKit session event dataProviderStateChanged, which disables all of my providers. My question: has anyone successfully found a workaround for this? I think it would be amazing to have one user be the "host" for the activity while the scene reconstruction provider continues to run for them.
Post not yet marked as solved · 1 Reply · 104 Views
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems they quickly transition from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement and, if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this, before I try to hack my own solution?
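Nothing below is Apple's actual implementation; it is only a sketch of the guess in the post: query the device (head) pose with ARKit's WorldTrackingProvider during the drag and fold head displacement into the target 1:1, so walking tracks the body while stationary drags stay amplified. The class name, amplification factor, and blending heuristic are all made up:

import ARKit
import QuartzCore
import simd

// Assumed setup: `world` is a running WorldTrackingProvider in an ARKitSession.
final class DragCompensator {
    private let world: WorldTrackingProvider
    private var lastHead: SIMD3<Float>?

    init(world: WorldTrackingProvider) {
        self.world = world
    }

    // Call once per drag update: amplified gesture delta in, compensated delta out.
    func compensate(gestureDelta: SIMD3<Float>, amplification: Float = 3.0) -> SIMD3<Float> {
        guard let device = world.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return gestureDelta * amplification
        }
        let m = device.originFromAnchorTransform
        let head = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
        defer { lastHead = head }
        let headDelta = lastHead.map { head - $0 } ?? .zero
        // Heuristic: movement explained by walking is applied 1:1;
        // the remainder keeps the stationary "push/pull" amplification.
        return headDelta + (gestureDelta - headDelta) * amplification
    }
}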
Post not yet marked as solved · 0 Replies · 86 Views
I'm building a visionOS app which loads a Reality Composer scene with a large number of models. The app includes several of these scenes and allows the user to switch between them. Because the scenes contain so many models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.

I've made a few small changes to the Mixed Immersive app template which demonstrate this behavior, included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I was able to easily find free ones online), the memory leak becomes much more obvious.

When the immersive space is initially opened, I see roughly 44 MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads continue to increase the used memory (the amount of increase depends on the models you add to the scene). Also note that I've seen similar memory increases when dynamically creating the entities; inside ViewModel.loadModels I've included some commented-out code that dynamically creates entities instead of loading a Reality Composer scene.

Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.

struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}

Add this above the body in ContentView:

@Environment(ViewModel.self) private var viewModel

The ContentView body should be:

VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()
    Button("Load Models") {
        viewModel.loadModels()
    }
    Button("Unload Models") {
        viewModel.unloadModels()
    }
}

ImmersiveView:

struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}

ViewModel:

import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() { }

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*
        if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }
        */
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
Posted by KGraus.
Post marked as solved · 1 Reply · 78 Views
I am trying to launch openImmersiveSpace, but there seems to be an issue with the openImmersiveSpace Task.

Error: Static method 'buildExpression' requires that 'Task<OpenImmersiveSpaceAction.Result, Never>' conform to 'View'

Here is the code; the error shows up on the "Task" line.

import SwiftUI
import RealityKit
import RealityKitContent

struct TestView: View {
    @Environment(\.openImmersiveSpace) var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

    var body: some View {
        VStack {
            Text("Open Full Immersive & switch to NextViewArea")
            NavigationLink {
                Task {
                    await openImmersiveSpace(id: "ImmersiveSpace")
                }
                NextViewArea()
            } label: {
                Label(" Enter Full Immersive Space")
            }
        }
    }
}

How can I move on to the next view area in the floating window while also launching the full immersive space? Any help would be much appreciated.
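The error comes from the NavigationLink's destination being a view builder, which can only contain views, not a Task expression. One possible fix (a sketch, not the only approach): keep the destination a plain view and fire the async call when it appears. NextViewArea is assumed from the post above:

import SwiftUI

struct TestView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        NavigationStack {
            VStack {
                Text("Open Full Immersive & switch to NextViewArea")
                NavigationLink("Enter Full Immersive Space") {
                    // The destination stays a plain view; the async call
                    // runs when it appears rather than inside the builder.
                    NextViewArea()
                        .task {
                            await openImmersiveSpace(id: "ImmersiveSpace")
                        }
                }
            }
        }
    }
}

// Stub for the poster's destination view.
struct NextViewArea: View {
    var body: some View { Text("Next view area") }
}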
Post not yet marked as solved · 4 Replies · 570 Views
Hi, my app launches with a mixed immersive space; the Preferred Default Scene Session Role is set to Immersive Space Application Session Role.

ImmersiveSpace(id: "sceneSpace") {
    ImmersiveView()
        .environmentObject(modelObject)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)

Other WindowGroups are opened too. Problem: when the x button (bottom left corner) is tapped on any WindowGroup, the immersive space is dismissed. When the user opens the app again, the immersive space is gone. The same happens when the user opens the Home Screen. How can I keep the same immersive space when the app is opened again? Thank you!
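A sketch of one possible workaround (the id "sceneSpace" comes from the post; everything else is an assumption): observe the scene phase from a window that is always present and reopen the immersive space whenever the app becomes active without it:

import SwiftUI

struct RootWindowView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.scenePhase) private var scenePhase
    @State private var immersiveSpaceIsShown = false

    var body: some View {
        Text("Main window")
            .onChange(of: scenePhase) { _, newPhase in
                guard newPhase == .active, !immersiveSpaceIsShown else { return }
                Task {
                    // Hypothetical recovery: reopen the space on relaunch.
                    switch await openImmersiveSpace(id: "sceneSpace") {
                    case .opened: immersiveSpaceIsShown = true
                    default: break
                    }
                }
            }
    }
}

The flag would also need to be cleared wherever the space is dismissed; this only sketches the relaunch path.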
Posted by gebs.
Post not yet marked as solved · 0 Replies · 66 Views
Hi, guys. I am preparing to develop a Vision Pro app with Unity. Play to Device, which connects the Unity engine and the Vision Pro, worked well, and there was no problem connecting to the Vision Pro simulator. But when I tried to connect Xcode and the Vision Pro, I couldn't see the Vision Pro itself in the device list. (The iPhone 11 that I wired up as a test is recognized fine.) I looked it up on the forum, and connecting was supposed to be simple; the link to the post I found is below.
https://forums.developer.apple.com/forums/thread/746464
I don't know why it's not working, even after looking it up on YouTube. I'm leaving my work environment below; I'd appreciate a helpful answer.

MacBook: M2 MacBook
Xcode version: 15.3
visionOS version: 1.1.2
Developer accounts: all use the same Apple developer account
Post not yet marked as solved · 1 Reply · 119 Views
I need to obtain data through MQTT subscriptions. Are there any ideas or frameworks for this? Thank you.
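One option, as an assumption rather than an Apple framework: the third-party CocoaMQTT library handles connect/subscribe in Swift. A minimal sketch, assuming its usual closure-based API, with a placeholder public broker and topic:

import CocoaMQTT

func startMQTT() -> CocoaMQTT {
    let mqtt = CocoaMQTT(clientID: "visionos-client",
                         host: "test.mosquitto.org",  // placeholder broker
                         port: 1883)
    mqtt.didConnectAck = { mqtt, ack in
        if ack == .accept {
            mqtt.subscribe("sensors/#")  // subscribe once the broker accepts us
        }
    }
    mqtt.didReceiveMessage = { _, message, _ in
        // message.string is the UTF-8 payload; message.topic the source topic.
        print("\(message.topic): \(message.string ?? "<binary payload>")")
    }
    _ = mqtt.connect()
    return mqtt
}

Alternatively, the Network framework can carry raw MQTT over NWConnection, but then the packet encoding is on you.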
Posted by junp.
Post not yet marked as solved · 0 Replies · 122 Views
Environment:
Apple Silicon M1 Pro
macOS 14.4
Xcode 15.3 (15E204a)
visionOS simulator 1.1

Steps: create a new visionOS app project and compile it through xcodebuild:

xcodebuild -destination "generic/platform=visionOS"

It fails on RealityAssetsCompile with the log:

error: Failed to find newest available Simulator runtime

But if I open the Xcode IDE and start a build, it works fine; this error only occurs with xcodebuild.

More: I noticed that in xcrun simctl list the Vision Pro simulator is in an unavailable state:

-- visionOS 1.1 --
Apple Vision Pro (6FB1310A-393E-4E82-9F7E-7F6D0548D136) (Booted) (unavailable, device type profile not found)

And I can't find the Vision Pro device type in xcrun simctl list devicetypes; does that matter? I have tried completely reinstalling Xcode and the simulator runtime, but I still get the same error.
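In case it helps to compare what the command-line tools think is installed: these are standard Xcode 15 commands, though whether they resolve this particular unavailable-runtime state is a guess, and "YourScheme" is a placeholder:

# List simulator runtimes as the CLI sees them
xcrun simctl runtime list

# Re-download the visionOS platform and simulator runtime from the CLI
xcodebuild -downloadPlatform visionOS

# Try an explicit simulator destination instead of the generic one
xcodebuild -scheme YourScheme -destination 'platform=visionOS Simulator,name=Apple Vision Pro'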
Posted by LZephyr.
Post not yet marked as solved · 3 Replies · 166 Views
Today I tried to add a second archive action for visionOS. I added a visionOS destination to my app target a while back, and I can build and archive my app for visionOS locally in Xcode 15.3 and also run it on the device. Xcode Cloud is giving me the following errors in the Archive - visionOS action (Archive - iOS works):

Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle MyApp.app is invalid.
Invalid sdk value. The value provided for the sdk portion of LC_BUILD_VERSION in MyApp.app/MyApp is 17.4 which is greater than the maximum allowed value of 1.2.
This bundle is invalid. The value provided for the key MinimumOSVersion '17.0' is not acceptable.
Type Mismatch. The value for the Info.plist key CFBundleIcons.CFBundlePrimaryIcon is not of the required type for that key. See the Information Property List Key Reference at https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40009248-SW1

All four errors are annotated with "Prepare Build for App Store Connect", and I get them for both the "TestFlight (Internal Testing Only)" and "TestFlight and App Store" deployment preparation options. I have tried removing the visionOS destination and adding it back, but this does not change the project at all. Any ideas what I am missing?
Posted by RK123.
Post not yet marked as solved · 0 Replies · 110 Views
Hi, guys. I would like to ask if anyone knows the FPS of screen recording and AirPlay on the Vision Pro; AirPlay here refers to mirroring the Vision Pro view to a MacBook/iPhone/iPad. Also, is there any way to record the screen at the native FPS of the Vision Pro (i.e., 90)?
Posted by felixYS.